Empirical Estimates in Stochastic Optimization: Special Cases
Author
Abstract
“Classical” optimization problems depending on a probability measure (and corresponding to many applications) belong mostly to a class of nonlinear deterministic optimization problems that are, from the numerical point of view, relatively complicated. On the other hand, these problems very often fulfil “suitable” mathematical properties guaranteeing stability (w.r.t. the probability measure) and, moreover, allowing the “underlying” probability measure to be replaced by an empirical one to obtain “good” stochastic estimates of the optimal value and the optimal solution. Properties of these (empirical) estimates have been studied mostly for standard types of “underlying” probability measures with suitable (thin) tails and independent random samples. However, it is known that probability distributions with heavy tails correspond better to many economic problems (see e.g. [18]) and, moreover, many applications do not correspond to the “classical” problems mentioned above. The aim of the paper is, first, to recall stability results in the case of heavy tails and, furthermore, to introduce more general problems to which the above-mentioned results can also be employed.
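The replacement of the “underlying” measure by an empirical one that the abstract describes can be sketched as a sample-average approximation. This is a minimal illustration only: the quadratic objective, the grid search, and the Pareto sampler (chosen to mimic a heavy-tailed distribution) are assumptions for the sketch, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, xi):
    # illustrative convex loss: deterministic in x, random through xi
    return (x - xi) ** 2

def empirical_optimal_value(sample, grid):
    # minimize the sample average of f over a grid of candidate x values,
    # i.e. solve the problem with the empirical measure of the sample
    values = [np.mean(f(x, sample)) for x in grid]
    i = int(np.argmin(values))
    return grid[i], values[i]

# heavy-tailed i.i.d. sample (shifted Pareto), echoing the heavy-tail setting
sample = rng.pareto(a=3.0, size=10_000) + 1.0
grid = np.linspace(0.0, 5.0, 501)
x_star, v_star = empirical_optimal_value(sample, grid)
print(x_star, v_star)  # empirical optimal solution and optimal value
```

For this particular loss the empirical minimizer is close to the sample mean, so the printed estimates stabilize as the sample size grows; the paper's question is how fast such estimates converge when the tails are heavy.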
منابع مشابه
Coupling Adaptive Batch Sizes with Learning Rates
Mini-batch stochastic gradient descent and variants thereof have become standard for large-scale empirical risk minimization like the training of neural networks. These methods are usually used with a constant batch size chosen by simple empirical inspection. The batch size significantly influences the behavior of the stochastic optimization algorithm, though, since it determines the variance o...
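The idea sketched in this abstract can be illustrated with a toy mini-batch SGD loop in which the batch size is grown during training so that the stochastic-gradient variance shrinks. The quadratic objective, doubling schedule, and step size below are illustrative assumptions, not the method of the cited paper.

```python
import numpy as np

rng = np.random.default_rng(1)
# minimize mean_i (x - d_i)^2 over scalar x; optimum is the data mean (2.0)
data = rng.normal(loc=2.0, scale=1.0, size=50_000)

x, lr, batch = 0.0, 0.1, 8
for step in range(200):
    idx = rng.integers(0, data.size, size=batch)
    grad = np.mean(2.0 * (x - data[idx]))   # stochastic gradient estimate
    x -= lr * grad
    if step % 20 == 19:
        batch = min(2 * batch, 4096)        # larger batch => lower variance
print(x)  # approaches the data mean, 2.0
```

Growing the batch plays a role similar to decaying the learning rate: both reduce the variance of the parameter updates near the optimum.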
Stochastic Optimization with Variance Reduction for Infinite Datasets with Finite Sum Structure
Stochastic optimization algorithms with variance reduction have proven successful for minimizing large finite sums of functions. However, in the context of empirical risk minimization, it is often helpful to augment the training set by considering random perturbations of input examples. In this case, the objective is no longer a finite sum, and the main candidate for optimization is the stochas...
Empirical Estimates in Stochastic Optimization via Distribution Tails
“Classical” optimization problems depending on a probability measure belong mostly to nonlinear deterministic optimization problems that are, from the numerical point of view, relatively complicated. On the other hand, these problems fulfil very often assumptions giving a possibility to replace the “underlying” probability measure by an empirical one to obtain “good” empirical estimates of the ...
Application of Stochastic Optimal Control, Game Theory and Information Fusion for Cyber Defense Modelling
The present paper addresses an effective cyber defense model by applying information fusion based game theoretical approaches. In the present paper, we are trying to improve previous models by applying stochastic optimal control and robust optimization techniques. Jump processes are applied to model different and complex situations in cyber games. Applying jump processes we propose some m...
A Markov Chain Monte Carlo Method for Global Optimization using Non-reversible, Stochastic Acceptance Probabilities
In this paper, we present a novel Markov Chain Monte Carlo framework for solving global optimization problems in the continuous domain. At each iterate, our algorithm uses a globally reaching Markov kernel to generate a candidate point in the feasible region. This candidate point is then accepted according to a possibly non-reversible acceptance probability. We derive sufficient conditions on t...
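The scheme this abstract outlines can be sketched as follows, under assumptions: candidates are drawn from a globally reaching kernel (here, uniform over a feasible interval) and accepted with a Metropolis-style probability. The test objective and the cooling schedule are illustrative choices, not those of the cited paper, and the acceptance rule here is the standard reversible one rather than the non-reversible variant the paper studies.

```python
import numpy as np

rng = np.random.default_rng(2)

def objective(x):
    # multimodal test function with global minimum 0 at x = 0
    return x ** 2 + 2.0 * np.sin(5.0 * x) ** 2

x = rng.uniform(-5.0, 5.0)
best = x
for t in range(1, 5001):
    cand = rng.uniform(-5.0, 5.0)        # globally reaching proposal kernel
    temp = 1.0 / np.log(t + 1.0)         # slowly cooling temperature
    accept = np.exp(min(0.0, (objective(x) - objective(cand)) / temp))
    if rng.random() < accept:
        x = cand
    if objective(x) < objective(best):
        best = x
print(best, objective(best))  # best point found and its objective value
```

Because the proposal kernel has full support over the feasible region, the chain cannot be trapped in a local basin, which is what makes such kernels attractive for global optimization.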